This was happening for xmalloc request sizes between 3921 and 3951
bytes. The reason is that xmem_pool_alloc() may add extra padding
to the requested size, making the total block size exceed a page.
Rather than add yet more smarts about TLSF to _xmalloc(), we just
dumbly attempt any request smaller than a page via xmem_pool_alloc()
first, then fall back to xmalloc_whole_pages() if that fails.
Based on bug diagnosis and initial patch by John Byrne <john.l.byrne@hp.com>
Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
 void *_xmalloc(unsigned long size, unsigned long align)
 {
-    void *p;
+    void *p = NULL;
     u32 pad;

     ASSERT(!in_irq());

     if ( !xenpool )
         tlsf_init();

-    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
-        p = xmalloc_whole_pages(size);
-    else
+    if ( size < PAGE_SIZE )
         p = xmem_pool_alloc(size, xenpool);
+    if ( p == NULL )
+        p = xmalloc_whole_pages(size);

     /* Add alignment padding. */
     if ( (pad = -(long)p & (align - 1)) != 0 )

         ASSERT(!(b->size & 1));
     }

-    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( b->size >= PAGE_SIZE )
         free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
     else
         xmem_pool_free(p, xenpool);